COMP2111
System Modelling and Design
Term 1, 2024

Notes (Week 9 Wednesday)

These are the notes I wrote during the lecture.

We've done:
  - the specification language (Hoare triples)
  - the inference systems for proving specs (Hoare logic)
  - the semantics of Hoare triples (denotational semantics of 𝓛)

  ⟦P⟧   :  the semantics of P
           ...which is a set of pairs of states.

   (s1,s2) ∈ ⟦P⟧
     means
   If we execute the program P starting from the state s1,
    then the execution can terminate in the state s2.


    ⟦P;Q⟧ = ⟦P⟧;⟦Q⟧
    ⟦if b then P else Q fi⟧ = ⟦b; P⟧ ∪ ⟦¬b; Q⟧

     (we think of a predicate b as a program that forces b to be true,
      similar to "assert b")

    ⟦while b do P od⟧ = ⟦b; P⟧*; ⟦¬b⟧


    (η,η') ∈ ⟦x := e⟧     iff   η' = η[x := ⟦e⟧η]
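The equations above can be animated over a tiny finite state space. This is a sketch, not course code; all the names (`states`, `frz`, `assign`, `guard`, `seq`) are invented for illustration:

```python
# A sketch of 𝓛's denotational semantics over a tiny finite state
# space: a program denotes a set of (s1, s2) state pairs.
from itertools import product

def states(names, vals):
    """All states over the given variable names and values."""
    return [dict(zip(names, v)) for v in product(vals, repeat=len(names))]

def frz(s):
    """Hashable snapshot of a state."""
    return tuple(sorted(s.items()))

def assign(x, e, S):
    """⟦x := e⟧ = { (η, η[x := ⟦e⟧η]) : η a state }"""
    return {(frz(s), frz({**s, x: e(s)})) for s in S}

def guard(b, S):
    """⟦b⟧: the identity relation restricted to states satisfying b."""
    return {(frz(s), frz(s)) for s in S if b(s)}

def seq(R1, R2):
    """⟦P;Q⟧ = ⟦P⟧;⟦Q⟧, i.e. relational composition."""
    return {(a, c) for (a, b1) in R1 for (b2, c) in R2 if b1 == b2}

S = states(["x"], range(3))
inc = assign("x", lambda s: (s["x"] + 1) % 3, S)
# seq(inc, inc) relates each state to the state with x bumped twice (mod 3).
```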

^ Using the semantics of 𝓛 above, we can define a semantics of
  Hoare triples:

   Syntactic definition
    ⊢ {φ} P {ψ}    means "we can prove this Hoare triple using the rules"

   Semantic definition
    ⊧ {φ} P {ψ}  holds iff    ⟦P⟧(⟨φ⟩) ⊆ ⟨ψ⟩
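A quick sketch of what the semantic definition means in practice, with states simplified to a single integer x (names and representation invented):

```python
# ⊧ {φ} P {ψ} holds iff ⟦P⟧(⟨φ⟩) ⊆ ⟨ψ⟩: the image of the states
# satisfying φ under the relation ⟦P⟧ lies inside the states satisfying ψ.

def valid(phi, R, psi):
    """Check ⟦P⟧(⟨φ⟩) ⊆ ⟨ψ⟩, where R = ⟦P⟧ is a set of state pairs."""
    image = {s2 for (s1, s2) in R if phi(s1)}
    return all(psi(s) for s in image)

# ⟦x := x + 1⟧, restricted to the finite state space 0..9:
R = {(x, x + 1) for x in range(9)}

# ⊧ { x < 5 } x := x + 1 { x < 6 } holds; strengthening the
# postcondition to x < 5 breaks it.
```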

Using this we can talk about soundness and completeness for Hoare logic:
  are the Hoare rules the right rules for proving Hoare triples?

*Soundness* (we can only prove true things in Hoare logic)

 If ⊢ {φ} P {ψ} then ⊧ {φ} P {ψ}
 "If we've (syntactically) proven a Hoare triple,
  it's a (semantically) valid Hoare triple"

If this were COMP4161, we would now do a full formal proof of this
proposition. But there is one in the slides that you can read if you want.

The proof is an induction on the derivation of the judgement ⊢ {φ} P {ψ}.

You'd need to prove one case for every inference rule of Hoare logic.
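As a taste of that induction, here is the assignment-axiom case sketched in the notation above (it leans on a substitution lemma, which is assumed here, not proved):

```latex
% Case (assign): the axiom is  {psi[e/x]} x := e {psi};
% we must show it is semantically valid.
\text{Suppose } (\eta, \eta') \in [\![x := e]\!]
  \text{ and } \eta \in \langle \psi[e/x] \rangle. \\
\text{By the semantics of assignment, } \eta' = \eta[x := [\![e]\!]\eta]. \\
\text{Substitution lemma: } \eta \in \langle \psi[e/x] \rangle
  \iff \eta[x := [\![e]\!]\eta] \in \langle \psi \rangle. \\
\text{Hence } \eta' \in \langle \psi \rangle,
  \text{ so } \models \{\psi[e/x]\}\; x := e\; \{\psi\}.
```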

*Completeness* (we can prove all the valid Hoare triples in Hoare logic)


 If ⊧ {φ} P {ψ} then ⊢ {φ} P {ψ} *

 ^ An induction on P (slightly harder, or much harder, proof depending
 on setting).

 Usually, this result only holds if you add some asterisks,
 which is why it's usually called "relative completeness".
 This happens because of the rule of consequence.

   φ ⇒ φ'   {φ'} P {ψ'}    ψ' ⇒ ψ
  _______________________________ (conseq)
            {φ} P {ψ}

 The problematic thing is the implications.  Hoare logic doesn't
 specify how exactly you're supposed to prove the implications.

 Therefore, you need an asterisk on the theorem statement as a
  concession to the possibility that some implications are unprovable
  in your system.

 * Given an oracle that can decide the validity of implications.

We've done:
  - the specification language (Hoare triples)
  - the inference systems for proving specs (Hoare logic)
  - the semantics of Hoare triples (denotational semantics of 𝓛)
We're going to do (at least)
  - accounting for termination (total correctness Hoare logic)
  - accounting for nondeterminism (𝓛+)


A note on notation: curly-brace Hoare triples are widely understood,
square-bracket ones much less so.

  {φ} P {ψ}    <--- everyone's seen this :)


  [φ] P [ψ]    <--- highly idiosyncratic notation
                    not necessarily used outside the course


    x := X;
    y := 0;
    while(x ≠ 0) do
      x := x - 1;
      y := y + 1
    od


  [ 0 ≤ X ]
    [ X+0 = X  ∧ 0 ≤ X]    
    x := X;
    [ x+0 = X  ∧ 0 ≤ x]
    y := 0;
    [ x+y = X ∧ 0 ≤ x]
    while(x ≠ 0) do
      [ x+y = X ∧ x ≠ 0 ∧ 0 ≤ x ∧ x = N ]

      [ (x-1)+(y+1) = X ∧ 0 ≤ x-1 ∧ x-1 < N ]        
      x := x - 1;
      [ x+(y+1) = X ∧ 0 ≤ x ∧  x < N ]
      y := y + 1
      [ x+y = X ∧ 0 ≤ x ∧ x < N ]
    od
    [ x+y = X ∧ ¬(x ≠ 0) ]
  [ x = 0 ∧ y = X ]

We also need to prove positivity on the side:

   x+y = X ∧ 0 ≤ x ∧ x ≠ 0  ⇒  x > 0

^ We'll need to think of a *loop variant* for our loop.
   An expression in terms of the variables that
   - decreases with every loop iteration
   - cannot be 0 if the loop guard is true

    v   :=    x

- The loop variant doesn't have to be just a program variable;
  it can be a more complicated expression.

  E.g. here we could also choose   X-y

- Usually, a variant suffices, but in some exotic cases you need to
  use a well-founded order instead of a variant (cf. the transition
  systems lectures)
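The annotated loop above can be sanity-checked at runtime by asserting the invariant, the variant decrease, and the postcondition. This is an illustration only: it tests the runs we happen to try, while the proof covers all of them.

```python
# Runtime check of the total-correctness annotations: the invariant
# x + y = X ∧ 0 ≤ x holds on every iteration, and the variant v = x
# strictly decreases, so the loop terminates.

def copy_via_loop(X):
    assert 0 <= X                      # precondition [0 ≤ X]
    x, y = X, 0
    while x != 0:
        assert x + y == X and 0 <= x   # loop invariant
        v = x                          # variant before the body runs
        x, y = x - 1, y + 1
        assert x < v                   # variant strictly decreased
    assert x == 0 and y == X           # postcondition
    return y
```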


We're going to use non-determinism
  *as part of our programming language*
  to model two phenomena:
  - underspecification:
        you want to write a pseudo-program
        that gives some freedom to the implementor

     quicksort(xs):
       choose a pivot x ∈ xs;
       ys := everything < x in xs;
       zs := everything ≥ x in xs;
       return(quicksort(ys) + quicksort(zs))
                            ^ + denotes concatenation

     No matter how I implement "choose a pivot"
     in my program, the algorithm is still correct.

     Therefore: we want to be able to prove quicksort correct without
     hard-coding the pivot selection (because the list becomes sorted
     either way)

     We can't do that in 𝓛 because in 𝓛 everything is deterministic.

  - magic

     Sometimes, you want to specify a program in the abstract
     such that you say what result you want, but not how to achieve it.

     This is very similar to "assert" statements.
     So far we've thought of "assert b" (or just b),
      to mean "crash unless b holds"

     We can also think of it like:
       "an angel descends from the heavens and
        a miracle happens to make b true"
       ^ this leaves another kind of hole in the program:
         instead of giving you a menu of options you need to resolve
         to implement the program (underspecification),
         it gives you a problem for which you need to somehow find an
         algorithm in order to implement the program.

TL;DR is:
  non-determinism is not randomness,
   it's underspecification
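The quicksort sketch above can be made concrete by passing the pivot choice in as a parameter, i.e. leaving it nondeterministic: every way of resolving the choice gives the same sorted output. Names here are invented; elements equal to the pivot are split out so the recursion always shrinks.

```python
# Quicksort with the pivot selection left underspecified.

def quicksort(xs, choose):
    if len(xs) <= 1:
        return list(xs)
    x = choose(xs)                         # any element of xs will do
    ys = [e for e in xs if e < x]          # everything < x
    eq = [e for e in xs if e == x]
    zs = [e for e in xs if e > x]          # everything > x
    return quicksort(ys, choose) + eq + quicksort(zs, choose)

# First-element and last-element pivots are two resolutions of the
# nondeterminism; both sort correctly.
```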

 𝓛+  :=   x := e   (assign)
       |   P; Q    (seq)
       |   φ       (miracle)
       |   P + Q   (choice)
       |   P*      (repetition)

We've tossed out
  if-then-else
  and while
because they can be seen as syntactic sugar

  if b then P else Q fi
  =
  (b; P) + (¬b; Q)

  while b do P od
  =
  (b; P)*; ¬b
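The while-loop desugaring can be checked on a finite state space, with programs as relations on states (representation invented for illustration):

```python
# Check that ⟦while b do P od⟧ = ⟦(b; P)*; ¬b⟧ for a countdown loop.

def seq(R1, R2):
    """Relational composition ⟦P;Q⟧ = ⟦P⟧;⟦Q⟧."""
    return {(a, c) for (a, b1) in R1 for (b2, c) in R2 if b1 == b2}

def star(R, S):
    """⟦P*⟧: reflexive-transitive closure of R over state space S."""
    result = {(s, s) for s in S}
    while True:
        bigger = result | seq(result, R)
        if bigger == result:
            return result
        result = bigger

S = range(6)
P = {(x, x - 1) for x in S if x > 0}   # ⟦x := x - 1⟧ (kept inside S)
b = {(x, x) for x in S if x != 0}      # guard x ≠ 0
not_b = {(x, x) for x in S if x == 0}  # guard x = 0

while_sem = seq(star(seq(b, P), S), not_b)
# Every state runs the loop down to 0: while_sem = { (x, 0) : x ∈ S }.
```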


We somehow have half an hour:

We could do:

 1. Sneak peek Monday's topic (λ-calculus)
 2. Catch up on quiz reviews
 3. Actually look at the Hoare logic soundness proof
 4. Assignment Q&A
 5. Go about our days


A model of computation is *Turing complete*
 if it can represent every computable function.

 A function is *computable* if we can write a Turing
 machine that implements it.

 The neat thing about Turing machines:
  nobody has come up with a sensible model of computation
  that can compute more functions than a Turing machine.

 Some can compute fewer (DFAs are strictly weaker).

 Almost all models that seriously try to capture general computation
  describe exactly the same computable functions as the Turing machine.

2024-04-19 Fri 10:38
